4 - Classifying Environments [ID:21869]

We've talked about agents and rationality, and we've talked about the PEAS model. The next thing to do is to look at environments and what kind of agent, or general agent architecture, they require. The first thing is to look at what environments can be like. There are essentially a couple of dimensions by which we can classify them.

We're going to call an environment fully observable if the agent actually has access to all the information about the environment. We'll call an environment partially observable if you only have some information about it. There are also environments where you have almost no information at all; think of wandering around in the fog with amnesia, which gives you very, very little information. Usually this semester we're going to look at fully observable environments, because that's easy, even though the world is actually not fully observable.

We will call an environment deterministic if we can actually compute the next state of the environment from our knowledge of the current one. Single-player games like solitaire are a deterministic setting: you do something, and it's clear what the next state is.
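Determinism in this sense can be sketched as a successor function that depends only on the current state and the chosen action. The toy state space and action names below are made up for illustration; they are not from the lecture.

```python
# Minimal sketch of a deterministic environment: the next state is a pure
# function of (state, action). The counter world here is hypothetical.
def successor(state: int, action: str) -> int:
    """Toy counter world: two actions, fully predictable outcomes."""
    if action == "inc":
        return state + 1
    if action == "dec":
        return state - 1
    return state  # unknown actions leave the state unchanged

# From any given state, the same action always yields the same next state.
assert successor(3, "inc") == 4
assert successor(3, "inc") == successor(3, "inc")
```

A stochastic environment, by contrast, would have to return a distribution over possible next states rather than a single one.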

Episodic is a little bit like simple computer games or board games: you have episodes during which nothing else happens. If you're playing chess and it's your move, then you move and the others don't. They might think, and get something out of their thinking, and that's fine, but they're not actually doing anything. The real world, of course, is not episodic. While you're attending the AI lecture, life goes on; we don't stop the world just so that we can have AI here. Fortunately, the mensa cooks are preparing the meal and all those kinds of things, so being non-episodic can also have a very good effect. An environment that is not episodic we call sequential. We call an environment dynamic

if changes can actually occur without the agent making them. Think about solitaire again: nothing changes unless you put a card down somewhere. If you play solitaire with wind, think of actual cards, or with little brothers this size, then the world can change on its own. Those are dynamic worlds; otherwise we have static worlds. Static worlds are of course the easy stuff, so that's what we're going to look at first, and next semester we're going to look at dynamic worlds. If the world is too dynamic, there's very little we can do. Most of the world, including the real world in many situations where we feel comfortable, is only semi-dynamic.

You're assuming that when the AI lecture is over, the mensa will still be open and you will get something to eat there, which you don't actually know; it might have changed, burnt down, or whatever. Being intelligent in a completely dynamic, or very dynamic, world is very difficult, so you really want something like a semi-dynamic world. There are some things that are kind of static, the university being around next year as well, and so on. We humans tend to get scared pretty easily if things become too dynamic. Think of an earthquake

or something like this. We'll call an environment discrete if we can describe it as having a countable set of states. Board games are a very important example of that: there is only a finite, even though very big, number of possible configurations of a chessboard. Whereas for the problem of moving a robot arm, or throwing a microphone, you don't have a discrete set of states. Many robots do, or old robots did, only move in discrete intervals, which actually makes the problems much nicer and easier. Continuous problems, which of course includes any real-world problem, are much harder: rather than being able to count things, you have to do calculus. For a robot arm you actually need to differentiate, look at all these curves in R3, and make sure that the second or third derivative doesn't get too big. Of course, we speak of single-agent environments

if there is only one agent that acts in it. Chess is a two-agent problem, soccer is probably a 23-agent problem, and AI lectures are something like 50-agent problems at the moment. The upshot here is that, depending on what your environment is, it becomes more or less difficult for your agent to act rationally in it. There are always the easy things; those are the ones we're going to cover this semester, with few exceptions, and we're going to work our way towards the harder ones.
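The six dimensions can be collected into a small record type. The class, its field names, and my reading of the lecture's examples below are all illustrative assumptions, not part of the course material.

```python
from dataclasses import dataclass

# Hypothetical record of the six classification dimensions from the lecture.
@dataclass(frozen=True)
class EnvClass:
    fully_observable: bool  # agent has access to all information
    deterministic: bool     # next state computable from state + action
    episodic: bool          # False means sequential
    static: bool            # False means dynamic or semi-dynamic
    discrete: bool          # countable set of states
    num_agents: int

# Open-card solitaire, per the lecture: deterministic, static, single agent.
solitaire = EnvClass(True, True, True, True, True, 1)
# A robot arm in the real world: continuous, sequential, not fully observable.
robot_arm = EnvClass(False, False, False, False, False, 1)

assert solitaire.static and not robot_arm.discrete
```

Ranking such records from "all easy" (solitaire) to "all hard" (the robot arm) mirrors how the course moves from the easy settings this semester towards the harder ones.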

Part of a chapter:
Rational Agents: a Unifying Framework for Artificial Intelligence

Accessible via: Open Access

Duration: 00:11:55 min

Recording date: 2020-10-26

Uploaded on: 2020-10-26 14:16:59

Language: en-US

Different environment classifications and examples.
